hysop.operator.gradient module¶
@file gradient.py
Gradient: compute dFi/dXj for a given field, up to all components in all directions.
MinMaxGradientStatistics: compute min(dFi/dXj), max(dFi/dXj) and/or max(|dFi/dXj|) for a given field, up to all components in all directions.
- class hysop.operator.gradient.Gradient(F, gradF, directions=None, implementation=None, cls=<class 'hysop.operator.derivative.FiniteDifferencesSpaceDerivative'>, base_kwds=None, **kwds)[source]¶
Bases:
MultiSpaceDerivatives
Generate multiple SpaceDerivative operators to compute the gradient of a Field.
Create an operator generator that yields a sequence of operators that compute the gradient of an input field F.
Given F, a scalar, vector or tensor field of dimension n, compute the field of dimension n+1 that is the gradient of F:
ScalarField: F -> gradF[j] = dF/dxj
VectorField: F -> gradF[i,j] = dFi/dxj
TensorField: F -> gradF[i0,…,in,j] = dF[i0,…,in]/dxj
Derivatives can be computed with respect to specific directions and not necessarily in all directions. To restrict the number of components, take a tensor view on F (and gradF).
- Example: if F is a VectorField of m components (F0, …, Fm) in a domain of dimension n, this operator will compute gradF[i,j] = dF[i]/dx[j]:

              ( dF0/dx0 … dF0/dxn )
              (    .    .    .    )
    grad(F) = (    .    .    .    )
              (    .    .    .    )
              ( dFm/dx0 … dFm/dxn )

- where:
  F is an input field
  gradF is an output field != F
  0 <= i < nb_components
  0 <= j < nb_directions
  nb_components = F.nb_components
  nb_directions = min(F.dim, len(directions))
- Parameters:
F (hysop.field.continuous_field.Field) – Continuous field as input (Scalar, Vector or TensorField). All contained fields have to live on the same domain.
gradF (hysop.field.continuous_field.Field) – Continuous field to be written; should have shape exactly F.shape + (ndirections,)
directions (tuple of ints, optional) – The directions in which to take the derivatives. Defaults to range(F.dim). nb_directions = min(F.dim, len(directions))
implementation (Implementation, optional, defaults to None) – target implementation, should be contained in available_implementations(). If None, implementation will be set to default_implementation().
kwds (dict, optional) – Extra parameters passed towards base class (MultiSpaceDerivatives).
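To make the index conventions above concrete, here is a minimal NumPy sketch of the computation the Gradient operator describes, i.e. gradF[i,j] = dF[i]/dx[j]. This is an assumption-laden illustration (centered finite differences via np.gradient; hysop supports several backends and schemes), not hysop's actual implementation, and the helper name gradient is hypothetical:

```python
import numpy as np

# Hypothetical helper sketching gradF[i, j] = dF[i]/dx[j] with NumPy's
# finite differences (an assumption; hysop uses its own backends/schemes).
def gradient(F, dx):
    # F.shape = (m, *grid_shape); returns gradF of shape (m, n, *grid_shape)
    m = F.shape[0]          # nb_components
    n = F.ndim - 1          # nb_directions (here: all spatial directions)
    gradF = np.empty((m, n) + F.shape[1:])
    for i in range(m):
        for j in range(n):
            # dFi/dxj, differentiating along spatial axis j with spacing dx[j]
            gradF[i, j] = np.gradient(F[i], dx[j], axis=j)
    return gradF

# 2D usage example: F = (x*y, x+y) on a uniform 32x32 grid
x, y = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32), indexing='ij')
F = np.stack([x * y, x + y])
gF = gradient(F, dx=(x[1, 0] - x[0, 0], y[0, 1] - y[0, 0]))
```

Here gF[0, 0] approximates d(xy)/dx = y and gF[1, 1] approximates d(x+y)/dy = 1, matching the gradF[i,j] layout above.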
- class hysop.operator.gradient.MinMaxGradientStatistics(F, gradF=None, directions=None, coeffs=None, Fmin=None, Fmax=None, Finf=None, all_quiet=True, print_tensors=True, name=None, pretty_name=None, pbasename=None, ppbasename=None, variables=None, implementation=None, base_kwds=None, cls=<class 'hysop.operator.min_max.MinMaxFiniteDifferencesDerivativeStatistics'>, **kwds)[source]¶
Bases:
Gradient
Interface for computing some statistics on the gradient of a field (minimum and maximum) one component at a time to limit memory usage. This will generate multiple MinMaxDerivativeStatistics operators.
Create an operator generator that yields a sequence of operators that compute statistics on the gradient of an input field F.
- MinMaxGradientStatistics can compute some commonly used Field statistics:
Fmin: component-wise and direction-wise min values of the gradient of the field.
Fmax: component-wise and direction-wise max values of the gradient of the field.
Finf: component-wise and direction-wise max values of the absolute value of the gradient of the field (computed using Fmin and Fmax).
Derivatives can be computed with respect to specific directions and not necessarily in all directions. To restrict the number of components, take a tensor view on F (and gradF).
Let k = idx + (j,):

gradF[k] = dF[idx]/dXd
Fmin[k] = Smin * min( dF[idx]/dXd )
Fmax[k] = Smax * max( dF[idx]/dXd )
Finf[k] = Sinf * max( |Fmin[k]|, |Fmax[k]| )

- where:
  F is an input field
  nb_components = F.nb_components = np.prod(F.shape)
  nb_directions = min(F.dim, len(directions))
  gradF is an optional output tensor field
  idx is contained in numpy.ndindex(*F.shape)
  0 <= j < nb_directions
  d = directions[j]
  Fmin = created or supplied TensorParameter of shape F.shape + (nb_directions,)
  Fmax = created or supplied TensorParameter of shape F.shape + (nb_directions,)
  Finf = created or supplied TensorParameter of shape F.shape + (nb_directions,)
  Smin = coeffs['Fmin'] or coeffs['Fmin'][k]
  Smax = coeffs['Fmax'] or coeffs['Fmax'][k]
  Sinf = coeffs['Finf'] or coeffs['Finf'][k]
- All statistics are only computed if explicitly required by the user,
unless they are needed to compute another required statistic, see Notes.
- Parameters:
F (Field) – The continuous input field on which the gradient will be taken and statistics will be computed.
gradF (Field, optional) – Optional output field for the gradient. If the gradient is required as an output, one can also use MinMaxStatistics on a precomputed gradient (using the Gradient operator) instead of MinMaxGradientStatistics.
directions (array like of ints, optional) – The directions in which the statistics are computed, defaults to all directions (i.e. range(F.dim)).
coeffs (dict of scalar or array like of coefficients, optional) – Optional scaling of the statistics. Each scaling factor should be a scalar or an array-like of scalars, one for each component of gradF. If not given, defaults to 1 for all statistics.
F... (TensorParameter or boolean, optional) – At least one statistic should be specified (either by boolean or TensorParameter). TensorParameters should be of shape F.shape + (nb_directions,), see Notes. If set to True, the TensorParameter will be generated automatically. Autogenerated TensorParameters that are not required by the user but are generated anyway are set to be quiet. All autogenerated TensorParameters can be retrieved as attributes of this object.
all_quiet (bool, optional, defaults to True) – Set all generated params to be quiet, even the ones that are requested explicitly.
print_tensors (bool, optional, defaults to True) – Should the phony operator print the tensor parameters during apply?
name (str, optional) – Name template for generated operator names. Defaults to MinMax({}) where {} will be replaced by gradF[k].name.
pretty_name (str, optional) – Name template for generated operator pretty names. Defaults to |+/-{}| where {} will be replaced by gradF[k].pretty_name.
pbasename (str, optional) – Basename for created tensor parameters. Defaults to gradF.name.
ppbasename (str, optional) – Pretty basename for created tensor parameters. Defaults to gradF.pretty_name.
variables (dict) – Dictionary of fields as keys and topologies as values.
implementation (hysop.constants.Implementation, optional) – Specify the generated operators' underlying backend implementation. Target implementation should be contained in MinMaxDerivativeStatistics.available_implementations(). If None, implementation will be set to MinMaxDerivativeStatistics.default_implementation().
base_kwds (dict) – Base class keyword arguments.
kwds (dict) – Additional kwds passed to chosen implementation.
- Attributes:
Fmin (TensorParameter) – Generated tensor parameter for the minimum statistic; None if unused.
Fmax (TensorParameter) – Generated tensor parameter for the maximum statistic; None if unused.
Finf (TensorParameter) – Generated tensor parameter for the max-absolute-value statistic; None if unused.
Notes
nb_components = F.nb_components
nb_directions = min(F.dim, len(directions))
- About statistics:
- Finf requires computing Fmin and Fmax.
Finf = Sinf * max( abs(Smin*Fmin), abs(Smax*Fmax))
where Sinf, Smin and Smax are the scaling coefficients defined in coeffs.
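The per-component, per-direction reductions described above can be sketched in plain NumPy. This is an illustrative assumption, not hysop's implementation (hysop evaluates one component at a time to limit memory usage, while this sketch reduces the whole array at once); the helper name min_max_grad_stats is hypothetical:

```python
import numpy as np

# Hypothetical sketch of the statistics MinMaxGradientStatistics gathers:
# one (min, max, max-abs) triple per (component, direction) pair, with the
# optional scaling coefficients Smin, Smax, Sinf applied as in the docs.
def min_max_grad_stats(gradF, Smin=1.0, Smax=1.0, Sinf=1.0):
    # gradF.shape = F.shape + (nb_directions,) + grid_shape; here F is a
    # vector field, so the leading axes are (component, direction) and we
    # reduce over the trailing grid axes.
    grid_axes = tuple(range(2, gradF.ndim))
    Fmin = Smin * gradF.min(axis=grid_axes)
    Fmax = Smax * gradF.max(axis=grid_axes)
    Finf = Sinf * np.maximum(np.abs(Fmin), np.abs(Fmax))
    return Fmin, Fmax, Finf

# Usage example: 2 components, 2 directions, 16x16 grid of derivative values
rng = np.random.default_rng(0)
gradF = rng.normal(size=(2, 2, 16, 16))
Fmin, Fmax, Finf = min_max_grad_stats(gradF)
```

Each returned array has shape F.shape + (nb_directions,), matching the TensorParameter shapes described in the Parameters section.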